Multimodal Deep Learning for Robust RGB-D Object Recognition
Robust object recognition is a crucial ingredient of many, if not all,
real-world robotics applications. This paper leverages recent progress on
Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture
for object recognition. Our architecture is composed of two separate CNN
processing streams - one for each modality - which are consecutively combined
with a late fusion network. We focus on learning with imperfect sensor data, a
typical problem in real-world robotics tasks. For accurate learning, we
introduce a multi-stage training methodology and two crucial ingredients for
handling depth data with CNNs. The first is an effective encoding of depth
information for CNNs that enables learning without the need for large depth
datasets. The second is a data augmentation scheme for robust learning with
depth images that corrupts them with realistic noise patterns. We present
state-of-the-art results on the RGB-D object dataset and show recognition in
challenging RGB-D real-world noisy settings.
Comment: Final version submitted to IROS'2015, results unchanged, reformulation of some text passages in abstract and introduction
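The two-stream, late-fusion design described in this abstract can be sketched in a few lines; the layer sizes, the ReLU projections standing in for the CNN streams, and the single-layer fusion head below are illustrative assumptions, not the paper's actual architecture (the RGB-D Object Dataset it evaluates on has 51 object categories).

```python
import numpy as np

rng = np.random.default_rng(0)

def stream_features(x, W):
    """Stand-in for one CNN processing stream: linear projection + ReLU."""
    return np.maximum(0.0, x @ W)

# Illustrative dimensions (assumptions, not the paper's values).
D_RGB, D_DEPTH, D_FEAT, N_CLASSES = 128, 128, 64, 51

W_rgb = rng.normal(scale=0.1, size=(D_RGB, D_FEAT))
W_depth = rng.normal(scale=0.1, size=(D_DEPTH, D_FEAT))
W_fuse = rng.normal(scale=0.1, size=(2 * D_FEAT, N_CLASSES))

def late_fusion_logits(rgb, depth):
    """Run each modality through its own stream, then combine the
    concatenated features with a late fusion layer."""
    f_rgb = stream_features(rgb, W_rgb)
    f_depth = stream_features(depth, W_depth)
    fused = np.concatenate([f_rgb, f_depth])
    return fused @ W_fuse

logits = late_fusion_logits(rng.normal(size=D_RGB), rng.normal(size=D_DEPTH))
print(logits.shape)  # (51,)
```

The key design point is that the two modalities are kept in separate streams until a late stage, so each stream can specialize before the fusion network sees both.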
Learning and Transfer of Modulated Locomotor Controllers
We study a novel architecture and training procedure for locomotion tasks. A
high-frequency, low-level "spinal" network with access to proprioceptive
sensors learns sensorimotor primitives by training on simple tasks. This
pre-trained module is fixed and connected to a low-frequency, high-level
"cortical" network, with access to all sensors, which drives behavior by
modulating the inputs to the spinal network. Where a monolithic end-to-end
architecture fails completely, learning with a pre-trained spinal module
succeeds at multiple high-level tasks, and enables the effective exploration
required to learn from sparse rewards. We test our proposed architecture on
three simulated bodies: a 16-dimensional swimming snake, a 20-dimensional
quadruped, and a 54-dimensional humanoid. Our results are illustrated in the
accompanying video at https://youtu.be/sboPYvhpraQ
Comment: Supplemental video available at https://youtu.be/sboPYvhpraQ
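The control loop this abstract describes, a fixed high-frequency "spinal" network whose inputs are modulated by a slower "cortical" network, can be sketched as below; the dimensions, the tanh networks, and the 10:1 update-rate ratio are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

OBS_DIM, PROPRIO_DIM, MOD_DIM, ACT_DIM = 32, 16, 8, 20
W_cortex = rng.normal(scale=0.1, size=(OBS_DIM, MOD_DIM))
# Pre-trained on simple tasks, then frozen.
W_spine = rng.normal(scale=0.1, size=(PROPRIO_DIM + MOD_DIM, ACT_DIM))

def cortical(obs):
    """High-level network: sees all sensors, outputs a modulation signal."""
    return np.tanh(obs @ W_cortex)

def spinal(proprio, modulation):
    """Low-level network: fixed sensorimotor primitives driven by
    proprioception plus the cortical modulation."""
    return np.tanh(np.concatenate([proprio, modulation]) @ W_spine)

K = 10  # cortical updates once per K spinal steps (illustrative ratio)
modulation = np.zeros(MOD_DIM)
for t in range(100):
    obs = rng.normal(size=OBS_DIM)       # full sensor reading
    proprio = obs[:PROPRIO_DIM]          # proprioceptive subset
    if t % K == 0:                       # low-frequency "cortical" step
        modulation = cortical(obs)
    action = spinal(proprio, modulation)  # high-frequency "spinal" step
print(action.shape)  # (20,)
```

Only the cortical weights would be trained on the high-level task; the spinal module stays fixed, which is what lets it provide reusable primitives for exploration.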
Playing Atari with Deep Reinforcement Learning
We present the first deep learning model to successfully learn control
policies directly from high-dimensional sensory input using reinforcement
learning. The model is a convolutional neural network, trained with a variant
of Q-learning, whose input is raw pixels and whose output is a value function
estimating future rewards. We apply our method to seven Atari 2600 games from
the Arcade Learning Environment, with no adjustment of the architecture or
learning algorithm. We find that it outperforms all previous approaches on six
of the games and surpasses a human expert on three of them.
Comment: NIPS Deep Learning Workshop 2013
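The core update behind this abstract is the Q-learning target y = r + γ max_a' Q(s', a'). A minimal numpy sketch of one such update is below; the linear Q-function and the hyperparameters are illustrative stand-ins for the paper's convolutional network, not its actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

N_STATE, N_ACTIONS, GAMMA, LR = 16, 4, 0.99, 0.01
# Stand-in for the network's parameters: one weight column per action.
W = rng.normal(scale=0.1, size=(N_STATE, N_ACTIONS))

def q_values(state):
    """Q(s, ·): estimated future reward for each action."""
    return state @ W

def q_learning_step(s, a, r, s_next, done):
    """One gradient step toward the target y = r + gamma * max_a' Q(s', a')
    (just r for terminal transitions)."""
    target = r if done else r + GAMMA * np.max(q_values(s_next))
    td_error = target - q_values(s)[a]
    W[:, a] += LR * td_error * s  # gradient of 0.5 * td_error**2 w.r.t. W[:, a]
    return td_error

s = rng.normal(size=N_STATE)
# Two updates on the same terminal transition: the TD error shrinks.
err_before = abs(q_learning_step(s, a=0, r=1.0, s_next=np.zeros(N_STATE), done=True))
err_after = abs(q_learning_step(s, a=0, r=1.0, s_next=np.zeros(N_STATE), done=True))
```

The point the abstract emphasizes is that the same update rule and architecture were applied unchanged across seven games, with raw pixels as input.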
06251 Abstracts Collection -- Multi-Robot Systems: Perception, Behaviors, Learning, and Action
From 19.06.06 to 23.06.06, the Dagstuhl Seminar 06251 "Multi-Robot Systems: Perception, Behaviors, Learning, and Action" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar, as well as abstracts of
seminar results and ideas, are collected in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available
- …